Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems

Authors: Xugui Zhou, Anqi Chen, Maxfield Kouzel, Haotian Ren, Morgan McCarty, Cristina Nita-Rotaru, Homa Alemzadeh

For class EE/CSC 7700 ML for CPS

Instructor: Dr. Xugui Zhou

Presentation by Group 6: Yuhong Wang

Time of Presentation: 10:30 AM, Friday, October 25, 2024

Blog post by Group 1: Joshua McCain, Josh Rovira, Lauren Bristol

Link to Paper:

https://arxiv.org/abs/2307.08939

Slide Outlines

Summary of the Paper

The authors of this paper introduce a new kind of adversarial patch attack they call CA-Opt. The attack makes small perturbations to the camera images of autonomous vehicles that cause large increases in the loss of DNN-based autonomous driving models. Assuming the attacker has knowledge of the ACC system in use, the authors show that CA-Opt can cause car crashes in a variety of situations while remaining undetected by both safety mechanisms and human drivers. Their simulation integrates OpenPilot with CARLA to better model real-world scenarios and shows that this input attack is stealthier and potentially deadlier than baseline methods that directly manipulate control commands or initialize the optimization with random values.

Title Slide

Title Image

The title slide introduces the paper being presented and lists its authors, including Dr. Zhou.

Introduction

Introduction Image

This slide is an introduction explaining the method of attack against DNN-based adaptive cruise control systems. These runtime attacks cannot be mitigated by human interactions and thus could be a major security risk.

Key Concepts

Key Concepts Image

Some key concepts, specifically the systems being attacked, are Adaptive Cruise Control (ACC) and Advanced Driver Assistance Systems (ADAS). ACC pertains to adjusting the speed of the vehicle based on the lead vehicle. ADAS focuses on staying within road lanes and collision avoidance. Some examples are given of each type.

Key Concepts Continued

Key Concepts Continued Image

Additional key concepts include context-aware strategies and adversarial patches. Context-aware strategies in autonomous driving make critical adaptive decisions based on the dynamic, changing environment cars operate in. An adversarial patch, in this case, makes small image perturbations that increase the loss of the machine learning model, causing unsafe, incorrect predictions.
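To make the patch idea concrete, here is a minimal, illustrative sketch (not the paper's exact method): each pixel inside a patch region is nudged a small step in the direction that increases the model's loss, FGSM-style. The function name and parameters are our own placeholders.

```python
def apply_patch_perturbation(image, grad, box, epsilon=2):
    """Illustrative sketch: perturb only the pixels inside `box`.

    image, grad: 2-D lists (pixel values and loss gradients).
    box: (row_start, row_end, col_start, col_end) patch region.
    epsilon: per-pixel step size (small, to stay stealthy).
    """
    perturbed = [row[:] for row in image]  # copy, leave original intact
    r0, r1, c0, c1 = box
    for r in range(r0, r1):
        for c in range(c0, c1):
            # step in the sign of the gradient to increase loss
            step = epsilon if grad[r][c] > 0 else -epsilon
            # clamp to the valid 8-bit pixel range
            perturbed[r][c] = max(0, min(255, image[r][c] + step))
    return perturbed
```

A small epsilon keeps each per-pixel change imperceptible to a human viewer while still shifting the model's prediction.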

ADAS Architecture

ADAS Architecture Image

The image provided shows how the attacker can target DNN inputs (the red 1 in the diagram) or the DNN outputs (the red 2) with the adversarial patches. This paper focuses on input attacks.

Attack Model

Attack Model Image

For this paper's attack, the goal is to remain undetected while increasing the error in forward-vehicle predictions. Accordingly, the paper assumes the attacker has knowledge of the vehicle's ACC system. Additionally, the attacker is assumed to have the ability to modify camera images in real time.

Attack Challenges

Attack Challenges Image

The paper's goals with the attack are to identify the optimal time to attack for maximum loss at runtime, create an attack value to adapt to the dynamic environment, and successfully implement the attack according to the real-time constraints.

Attack Design

Attack Design Image

Addressing challenge 1, instead of using random attack parameters, the paper selects specific values for the attack start time and duration to achieve the greatest loss.

Attack Design Continued

Attack Design Continued Image

For challenges 2 and 3, an adaptive algorithm is used to dynamically determine the best pixel values for the adversarial patch to best disrupt the autonomous system.

Attack Design Algorithm

Attack Design Algorithm Image

The slide presents an algorithm for generating the adversarial patch. At (1), the objective function is defined to maximize loss while remaining undetected. At (2), the perturbation Patch_t is generated. At (3) and (4), the patch is constrained to remain within the bounding box of the detected lead vehicle. At (5), the adversarial image X_adv is generated by adding the adversarial patch to the original image. At (6), the algorithm updates the relative distance and speed using the adversarial image as input. At (7), malicious control commands are generated by the ACC system. Finally, at (8), the vehicle state is updated.
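The closed-loop effect of steps (6)-(8) can be sketched with a toy simulation: if each attacked frame adds a small bias to the DNN's distance estimate, the biases accumulate and push the ACC toward an unsafe command even though no single frame looks suspicious. The ACC model and gains below are hypothetical simplifications, not OpenPilot's controller.

```python
def acc_command(est_distance, desired_gap=30.0, gain=0.1):
    """Toy ACC law: accelerate when the (estimated) gap looks large."""
    return gain * (est_distance - desired_gap)  # m/s^2

def run_attack(true_distance, bias_per_frame, frames):
    """Each frame, the patch adds a small bias to the perceived
    distance; over many frames the error grows large while the
    per-frame change stays stealthy."""
    est = true_distance
    accels = []
    for _ in range(frames):
        est += bias_per_frame          # step (6): corrupted estimate
        accels.append(acc_command(est))  # step (7): malicious command
    return est, accels
```

Running this with a 0.5 m bias per frame over 20 frames inflates a 30 m gap estimate to 40 m, so the ACC keeps accelerating toward the lead vehicle.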

Attack Design Equations

Attack Design Equations Image

Specifically for challenge 3, these equations update the adversarial patch position, define how to initialize the new patch, and finalize the initialization through mask M, respectively.

Attack in Action

Attack in Action Image

This slide shows how the previous equations shift and enlarge the patch area, demonstrating how the attack tracks the leading vehicle over time.
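The shift-and-enlarge idea can be sketched as re-anchoring the patch to the newly detected lead-vehicle bounding box each frame, padded by a small margin so the patch keeps covering the target as it moves and grows in the image. The `margin` parameter here is an illustrative assumption, not a value from the paper.

```python
def update_patch_box(lead_box, margin=2):
    """Re-anchor the patch region to the current lead-vehicle
    bounding box, enlarged by `margin` pixels on each side
    (clamped at the image edge, assumed at index 0)."""
    r0, r1, c0, c1 = lead_box
    return (max(0, r0 - margin), r1 + margin,
            max(0, c0 - margin), c1 + margin)
```

Called once per frame with the latest detection, this keeps the perturbation positioned on the lead vehicle rather than drifting onto the background.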

Safety Intervention Simulation

Safety Intervention Simulation Image

The diagram shows OpenPilot being integrated with CARLA simulation in order to simulate the real-world ADAS system under attack for demonstration of the paper's attacks.

Simulation Explanation

Simulation Explanation Image

The previous diagram is explained as showing how the OpenPilot simulation was enhanced for this paper by integrating three safety intervention levels and improving the AEBS simulator.

Simulation Explanation Continued

Simulation Explanation Continued Image

By integrating OpenPilot and CARLA, the paper prioritizes safety intervention and control commands. This is so that the simulation can better represent real-world ADAS systems. Fusing camera and radar data was also done for this purpose.

Three Level Safety Intervention

Three Level Safety Intervention Image

The three levels of safety integrated into OpenPilot were the ADAS safety features, vehicle constraint checks on control commands, and driver interventions. By integrating these features, OpenPilot could better simulate a real-world vehicle in which an attack may occur.

AEBS Design

AEBS Design Image

To design and test the AEBS, a time-to-collision (TTC) control method was implemented. When the TTC value, computed by the first function, drops below the t_fcw, t_pb1, t_pb2, or t_fb thresholds, the corresponding brake values are applied, as demonstrated in the associated diagram.
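A staged AEBS of this kind can be sketched as a TTC computation followed by a threshold ladder. The threshold ordering t_fcw > t_pb1 > t_pb2 > t_fb matches the slide; the specific numeric values below are illustrative assumptions, not the paper's calibration.

```python
def ttc(rel_distance, rel_speed):
    """Time-to-collision in seconds; infinite if the gap is not closing."""
    return rel_distance / rel_speed if rel_speed > 0 else float("inf")

def aebs_stage(t, t_fcw=2.6, t_pb1=1.7, t_pb2=1.2, t_fb=0.8):
    """Map a TTC value to an intervention stage (thresholds are
    illustrative). Lower TTC triggers a stronger response."""
    if t <= t_fb:
        return "full_brake"
    if t <= t_pb2:
        return "partial_brake_2"
    if t <= t_pb1:
        return "partial_brake_1"
    if t <= t_fcw:
        return "forward_collision_warning"
    return "no_action"
```

For example, closing a 20 m gap at 10 m/s gives a TTC of 2 s, which lands in the forward-collision-warning band, while a 0.4 s TTC commands full braking.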

Control Command Dispatcher

Control Command Dispatcher Image

Because there may be many safety mechanisms, control commands from the OpenPilot ACC controller may start to disagree with safety interventions. A command dispatcher communicating with CARLA was created to determine priority among the commands.
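Such a dispatcher can be sketched as a simple priority resolver: when multiple sources issue conflicting commands, the highest-priority active source wins. The ordering below (driver over AEBS over ACC) is our assumption from the slide's description, not the paper's exact implementation.

```python
# Hypothetical priority ordering: higher number wins a conflict.
PRIORITY = {"driver": 3, "aebs": 2, "acc": 1}

def dispatch(commands):
    """commands: dict mapping source name -> command (or None if
    that source is inactive). Returns the command from the
    highest-priority active source, or None if all are inactive."""
    active = {src: cmd for src, cmd in commands.items() if cmd is not None}
    if not active:
        return None
    winner = max(active, key=lambda src: PRIORITY.get(src, 0))
    return active[winner]
```

So an AEBS brake command overrides the ACC's acceleration, and a driver intervention overrides both.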

Simulation Research Questions

Simulation Research Questions Image

The testing success of this new attack relies on the presented research questions. (1) Will strategic attack time and value selection increase hazard? (2) Can the attack evasion techniques prolong attack effectiveness? (3) Does this input attack have better performance than direct control tampering?

Baseline Strategies

Baseline Strategies Image

Baseline attack strategies are defined for this paper's experiments, such as CA-Random and CA-APGD, to compare against this paper's attack, CA-Opt.

Attack Effectiveness

Attack Effectiveness Image

For research question 1, CA-Opt achieves a 100% attack success rate, far exceeding all baseline strategies. CA-Opt is thus shown to be a highly effective attack strategy.

Attack Stealthiness

Attack Stealthiness Image

For all three tests performed, CA-Opt achieves at least 99.2% success in evasion of detection, thus proving that the perturbations are largely undetectable while still being relatively small themselves.

Output vs. Input Attack Comparison

Output vs. Input Attack Comparison Image

The given graphs show that CA-Opt's input manipulation causes only subtle changes in the DNN predictions. Because the attack does not modify control outputs directly, it is very difficult for safety mechanisms and human drivers to detect. This differs from direct tampering with control outputs, which is easier to detect.

Real World Evaluation

Real World Evaluation Image

Across 12 performed tests, CA-Opt was successful in causing a crash in all scenarios. This demonstrates that the attack works in a variety of camera and sensor scenarios, thus proving the great effectiveness of CA-Opt.


Discussions

Discussion 1: What other locations can be attacked (by CA-Opt)?

Discussion 2: What defense mechanism can be used against these adversarial patch attacks?


Questions

Dr. Zhou: Are there other CPS sections in your presentation that describe what other systems could be attacked by CA-Opt?

We believe that CA-Opt can be used to attack things like GPS system data or even be applied to some DNN output to further confuse the autonomous driving system.

Q2: Group 3 asked: How do you extract the functions needed for this kind of attack and have it for real-time execution?

Dr. Zhou: We assume that the attacker already has access to these functions that are needed for the attack. This is actually relatively simple with open source autonomous driving systems like OpenPilot. We do not assume the attacker has access to the DNN model itself. If an attacker wants, the open source developers post all of the information on their website. Attackers could even rent a vehicle of the target's autonomous driving type and reverse engineer what they need from the software.